To answer the above question, first we must answer the question “What is Tagpro?” Tagpro is an online capture-the-flag (CTF) game, generally played between two teams of four players on a variety of tile-based maps. Each player is represented by a two-dimensional, paint-filled ball that rolls around the map using only cardinal-direction keypresses (up/down/left/right) as input. In the competitive meta, teams typically consist of two primary offenders and two primary defenders. A team’s main objective is to grab the other team’s flag and bring it safely back to their own flag without getting tagged, with the restriction that a team can only score when its own flag is also safe in base. Offenders thus focus their efforts on grabbing the other team’s flag, holding it and staying alive until an opportunity arises, and then pushing into their own base to attempt a capture. Other duties include helping their defense tag or contain the opposing flag carrier, fighting for powerups, blocking enemies to assist their teammates, and getting ‘regrab’. Regrab refers to the strategy of having one offender wait in the opponent’s base while their partner has the opponent’s flag, so that if the partner is popped, the waiting offender can instantly grab the flag when it teleports back to the opponent’s base.
Defenders focus primarily on keeping their team’s flag secure on the flag tile; if the other team manages to grab, they are primarily responsible for chasing the flag carrier and tagging them to return the flag. When a flag carrier is tagged, their ball pops and the flag teleports back to its home tile in base. Other duties for defenders include fighting for powerups, blocking the other team so their flag carrier can come safely into base, and playing ‘anti-re’. Anti-re is a strategy developed to counter regrab, involving a player guarding the empty flag tile in their own base so that, once the enemy flag carrier dies and the flag teleports back, someone is there to defend it from the other team’s regrab.
Now that I’ve explained what Tagpro itself is, we can revisit the initial question: What is MLTP? MLTP is the premier level of competitive Tagpro in the North American scene; the acronym stands for ‘Major League TagPro’. This data visualization project will showcase some of the primary metrics used to evaluate MLTP players and how the stats have changed across modern competitive seasons. Season 10 will be considered the first modern season for the purpose of this exploration, since it was the first season that anti-re saw widespread use in the competitive meta. Additionally, Season 10 was the first season in which stats were captured automatically and imported into TagproLeague, rather than being recorded by hand in Excel documents. With the advent of that technology came many more statistics and metrics that simply couldn’t be recorded previously.
The main statistics used to holistically judge the individual performance of an offender are captures, grabs, scoring percentage, powerups, and hold. Captures is the number of times an offender scored; grabs is the number of times an offender picked up the flag; scoring percentage is the percentage of grabs that result in a capture: \(\frac{Captures}{Grabs} \times 100\); powerups is simply the number of powerups a player picked up; and hold is the total time an offender has the flag in their possession.
Defenders are holistically judged on prevent, tags, powerups, and kill/death ratio. Prevent is the total time the defender protected the flag from being grabbed; tags is the number of times a player pops a player on the other team; and (unsurprisingly) kill/death ratio is just \(\frac{Tags}{Pops}\), the number of tags divided by the number of pops.
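For concreteness, these per-player formulas reduce to a couple of lines of code. The original analysis was done in R, but here is a quick Python sketch; the stat lines passed in at the bottom are made up purely for illustration.

```python
def scoring_percentage(captures: int, grabs: int) -> float:
    """Percentage of grabs that end in a capture: (Captures / Grabs) * 100."""
    return captures / grabs * 100 if grabs else 0.0

def kill_death_ratio(tags: int, pops: int) -> float:
    """Tags (kills) divided by pops (deaths)."""
    return tags / pops if pops else float(tags)

# Hypothetical stat lines, for illustration only.
print(scoring_percentage(12, 40))  # 30.0
print(kill_death_ratio(45, 15))    # 3.0
```

The zero-denominator guards are just defensive defaults for a player with no grabs or no pops; how the league handles those edge cases is not specified here.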
Finally, team success is typically measured using either win/loss/tie record or by capture differential. I will use capture differential for the purposes of this data visualization, as a quantitative measure of success is more useful in this case than a qualitative one, since winning by one capture is a much smaller indicator of success than winning by ten captures.
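To illustrate why capture differential is more informative than record alone, consider a hypothetical three-game stretch (the scores below are invented):

```python
# Hypothetical game scores: (captures for, captures against).
games = [(5, 2), (3, 3), (1, 4)]

record = (sum(f > a for f, a in games),   # wins
          sum(f < a for f, a in games),   # losses
          sum(f == a for f, a in games))  # ties
cap_diff = sum(f - a for f, a in games)

print(record)    # (1, 1, 1)
print(cap_diff)  # 0: a +3 win, a tie, and a -3 loss net out to even
```

Two teams with identical 1-1-1 records can have very different differentials, which is exactly the distinction the quantitative measure preserves.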
First, we consider a handful of interactive scatterplots comparing some of the defensive metrics to each other. In each plot, the size of the data point depicts the score differential per minute for that observation. Hovering over each observation will provide the player’s name and the abscissa and ordinate values. Each plot also has a slider that controls which MLTP season is displayed on the graph; hitting ‘Play’ will have the plot cycle autonomously through the seasons for a full rotation, and manually clicking on a season in the slider will shift the plot directly to that season.
First, we compare tags per minute to prevent per minute, colored by the main position of the player in each observation. Comparing the two groups, defenders and offenders, gives a schema for what kind of stats a player should be getting for their position:
Next, we remove the offenders from the mix to see the plot for the defenders only. Although seeing the positional differences was useful, reducing the domain and range of the axes provides a better picture of the subset of players for whom the stats are actually important:
Now we employ the same technique for hold per minute compared to grabs per minute, again presenting both positions separated by color, and then a subset including only the offenders:
Beeswarm plots are a magnificent way to visualize distributions of MLTP stats. Beeswarms show the individual observations as unique data points, unlike other distribution visualization techniques, which often encode the data abstractly. For example, boxplots use a box and whiskers to encapsulate all of the non-outlier observations and rely on quartiles and summary statistics. Histograms employ binning to group the data points into bars, again obscuring the individual observations. One of the main limitations of beeswarm plots is that for increasingly large datasets, it becomes more and more difficult to show each observation. Modern MLTP seasons typically have somewhere between 32 and 64 majors starters, depending on the number of teams, which falls into the optimal range for the number of observations a beeswarm plot can display.
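To make the layout idea concrete, here is a minimal Python sketch of one greedy beeswarm placement: sweep the values in sorted order and nudge each point sideways, alternating right and left, until it no longer overlaps an already-placed neighbour. This is only an illustration of the technique, not the algorithm any particular package actually uses.

```python
def beeswarm_offsets(values, radius=0.5):
    """Greedy beeswarm layout: return a sideways offset for each value so
    that points drawn as circles of the given radius do not overlap."""
    placed = []    # (value, offset) pairs already laid out
    offsets = {}
    step = radius / 4
    for i, v in sorted(enumerate(values), key=lambda t: t[1]):
        off, k = 0.0, 1
        # Nudge alternately right and left until this point's centre is at
        # least 2*radius away from every previously placed centre.
        while any((v - pv) ** 2 + (off - po) ** 2 < (2 * radius) ** 2
                  for pv, po in placed):
            off = step * ((k + 1) // 2) * (1 if k % 2 else -1)
            k += 1
        placed.append((v, off))
        offsets[i] = off
    return [offsets[i] for i in range(len(values))]

# Identical values fan out sideways; well-separated values stay centred.
print(beeswarm_offsets([1, 1, 1]))  # [0.0, 1.0, -1.0]
print(beeswarm_offsets([0, 10]))    # [0.0, 0.0]
```

This also shows why large datasets strain beeswarms: every tied or near-tied value must be pushed further out along the offset axis, and the swarm eventually outgrows the plot.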
To supplement each beeswarm plot, I have also included an interactive boxplot (a.k.a. box-and-whisker plot) with the individual data points slightly offset from the main body of the boxplot. Hovering over the box for each season will display the summary statistics for that box: minimum, maximum, first quartile, third quartile, median, and upper/lower fence if applicable. Hovering over each data point will give the player’s name, the season, and the ordinate value. Toggling the colored boxes in the legend allows the box for each season to be removed from or returned to the plot, and double-clicking on any of the boxes will display that season in isolation, removing all other seasons from the boxplot. Double-clicking on the box again will revert the plot to its default display.
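For reference, the summary statistics shown in those tooltips can be computed as in the Python sketch below. Note that quartile conventions vary between implementations; this uses the default exclusive method of Python’s `statistics.quantiles`, which may differ slightly from what plotly reports.

```python
import statistics

def box_summary(xs):
    """Five-number summary plus Tukey fences (Q1 - 1.5*IQR, Q3 + 1.5*IQR).
    Points beyond the fences are the ones a boxplot draws as outliers."""
    q1, med, q3 = statistics.quantiles(xs, n=4)  # exclusive method by default
    iqr = q3 - q1
    return {
        "min": min(xs), "q1": q1, "median": med, "q3": q3, "max": max(xs),
        "lower_fence": q1 - 1.5 * iqr,
        "upper_fence": q3 + 1.5 * iqr,
    }

print(box_summary(list(range(1, 9))))  # toy data, not real MLTP stats
```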
We can see that defenders were not very adept at keeping the flag in base back in Season 10, with a median prevent per minute (PPM) of only 11.76; the highest was only 16.59 PPM, from YoungSinatra, and the lowest an abysmal 6.02 PPM, from Gem. The following season was a little cleaner at the bottom end, with a minimum of 9.36 PPM, which raised the median up to 13.33 PPM. However, the maximum of 16.24 PPM from Bal McCartny suggests that the top defenders didn’t make any meaningful progress in shutting down offenses in S11 compared to S10.
Season 12 marked the beginning of the Preventaissance, with the Meme*Team defensive duo of Syniikal and YoungSinatra (xXw3Edl0rdXx) shattering all previous prevent records by staggering margins: Syniikal posted 19.93 PPM, and YoungSinatra posted 20.19 PPM, becoming the first player in MLTP ever to break the 20+ PPM threshold. No player in any previous season had even crossed 17 PPM, yet S12 saw a total of four players do so, with SIDE and EASHY coming in at 17.52 PPM and 17.33 PPM respectively. The median also saw another big jump, from 13.33 PPM up to 14.58 PPM.
S13 saw defensive prowess continue to flourish, with the median prevent climbing to 15.75 PPM. Syniikal finished with a solid 20.05 PPM, making him the second player ever to cross the 20+ PPM barrier, and seven other players joined him in the 17+ club that season: C Bivvey, alchemist, BALLDON’TLIE, iAaronK, aardvark, siDe, and Eashy. S14 brought a period of stagnation for most of the league, with the median moving almost imperceptibly up to 15.90 PPM. Syniikal, however, achieved a mind-blowing 25.14 PPM, over 9 PPM greater than the median, becoming the first and almost certainly the only player ever to cross the 25+ PPM threshold.
S15 and S16 both saw minuscule increases in the median PPM, again with Syniikal leading the pack in both seasons. S16 saw HERB become the third member to join the 20+ PPM crew. S10-S16 saw monotonic increases in the median PPM, but a major change in S17 would shake the game to its very core: the game’s creator, known in-game as ‘LuckySpammer’, removed every extant Tagpro server and replaced them with new ones from a different provider. For many players, the new servers were a direct downgrade, and complaints of bad ping and server instability plagued the league like never before. S17 marked the first season in the modern era where the median PPM fell from the previous season. This didn’t stop BigBird (or Syniikal), however, as BigBird still managed to accrue 20.69 PPM, becoming the fourth player to join the 20+ PPM club, and the only one who managed it without being Syniikal’s partner. The median prevent continued to nosedive in S18, all the way down to 14.58 PPM, lower than it had been since S11. S19 saw a very slight recovery in the median prevent, but the distribution looks fairly similar to S18’s. Syniikal, playing as TomatoFarmer, once again topped the prevent leaderboard in his seventh straight season of not only finishing first in prevent, but doing so with over 20 PPM. Only time will tell whether the defenders will shore up their collective in-base acumen or whether the offenders have ultimately begun to turn the tide in the North American competitive scene.
I’ll keep this one short and sweet. S10 saw three players cross the 3+ K/D ratio barrier:
I have created beeswarms and boxplots for some of the stat categories that are rarely talked about in competitive play, but may still be of interest. Unlike the previous sections, I will merely display the distributions and eschew any explanations.
## R version 3.6.1 (2019-07-05)
## Platform: x86_64-w64-mingw32/x64 (64-bit)
## Running under: Windows 10 x64 (build 17763)
##
## Matrix products: default
##
## locale:
## [1] LC_COLLATE=English_United States.1252
## [2] LC_CTYPE=English_United States.1252
## [3] LC_MONETARY=English_United States.1252
## [4] LC_NUMERIC=C
## [5] LC_TIME=English_United States.1252
##
## attached base packages:
## [1] stats graphics grDevices utils datasets methods base
##
## other attached packages:
## [1] beeswarm_0.2.3 pls_2.7-2 factoextra_1.0.6
## [4] data.table_1.12.2 plotly_4.9.1 ggfortify_0.4.8
## [7] forcats_0.4.0 stringr_1.4.0 dplyr_0.8.3
## [10] purrr_0.3.2 readr_1.3.1 tidyr_1.0.0
## [13] tibble_2.1.3 ggplot2_3.2.1 tidyverse_1.2.1
##
## loaded via a namespace (and not attached):
## [1] ggrepel_0.8.1 Rcpp_1.0.2 lubridate_1.7.4
## [4] lattice_0.20-38 assertthat_0.2.1 zeallot_0.1.0
## [7] digest_0.6.21 mime_0.7 R6_2.4.0
## [10] cellranger_1.1.0 backports_1.1.4 evaluate_0.14
## [13] httr_1.4.1 pillar_1.4.2 rlang_0.4.0
## [16] lazyeval_0.2.2 readxl_1.3.1 rstudioapi_0.10
## [19] rmarkdown_1.15 htmlwidgets_1.3 munsell_0.5.0
## [22] shiny_1.3.2 broom_0.5.2 compiler_3.6.1
## [25] httpuv_1.5.2 modelr_0.1.5 xfun_0.9
## [28] pkgconfig_2.0.3 htmltools_0.3.6 tidyselect_0.2.5
## [31] gridExtra_2.3 viridisLite_0.3.0 crayon_1.3.4
## [34] withr_2.1.2 later_0.8.0 grid_3.6.1
## [37] nlme_3.1-140 jsonlite_1.6 xtable_1.8-4
## [40] gtable_0.3.0 lifecycle_0.1.0 magrittr_1.5
## [43] scales_1.0.0 cli_1.1.0 stringi_1.4.3
## [46] promises_1.0.1 xml2_1.2.2 generics_0.0.2
## [49] vctrs_0.2.0 RColorBrewer_1.1-2 tools_3.6.1
## [52] glue_1.3.1 hms_0.5.1 crosstalk_1.0.0
## [55] yaml_2.2.0 colorspace_1.4-1 rvest_0.3.4
## [58] knitr_1.25 haven_2.1.1